human-ai interaction
Governing the rise of interactive AI will require behavioral insights
AI is no longer just a translator or image recognizer. Today we engage with systems that remember our preferences, proactively manage our calendars, and even provide emotional support. These systems build ongoing bonds with us and adapt their behavior to our habits. They don't just wait for commands; they suggest next steps.
Interview with Mario Mirabile: trust in multi-agent systems
In a new series of interviews, we're meeting some of the PhD students who were selected to take part in the Doctoral Consortium at the European Conference on Artificial Intelligence (ECAI 2025). During the conference in Bologna, we caught up with Mario Mirabile, who is studying for his PhD in trustworthy AI and multi-agent systems at the University of Santiago de Compostela and is a Research Fellow in human-AI interaction at the University of Bologna. Mario, along with co-authors Frida Hartman and Michele Dusi, was also the winner of the ECAI-2025 Diversity & Inclusion Competition, for work entitled . This award was presented at the closing ceremony of the conference. Could you start by giving us an introduction to the topic you are working on?
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.46)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.26)
- Europe > Spain > Galicia > A Coruña Province > Santiago de Compostela (0.26)
- Europe > Italy > Sicily (0.05)
Mutual Wanting in Human--AI Interaction: Empirical Evidence from Large-Scale Analysis of GPT Model Transitions
The rapid evolution of large language models (LLMs) creates complex bidirectional expectations between users and AI systems that are poorly understood. We introduce the concept of "mutual wanting" to analyze these expectations during major model transitions. Through analysis of user comments from major AI forums and controlled experiments across multiple OpenAI models, we provide the first large-scale empirical validation of bidirectional desire dynamics in human-AI interaction. Our findings reveal that nearly half of users employ anthropomorphic language, trust significantly exceeds betrayal language, and users cluster into distinct "mutual wanting" types. We identify measurable expectation violation patterns and quantify the expectation-reality gap following major model releases. Using advanced NLP techniques including dual-algorithm topic modeling and multi-dimensional feature extraction, we develop the Mutual Wanting Alignment Framework (M-WAF) with practical applications for proactive user experience management and AI system design. These findings establish mutual wanting as a measurable phenomenon with clear implications for building more trustworthy and relationally-aware AI systems.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Oceania > Australia > Western Australia (0.04)
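The anthropomorphic-language finding above can be illustrated with a minimal lexicon-based flagger. This is only a sketch of the general idea: the term list and sample comments below are invented for illustration, and the paper's actual pipeline uses dual-algorithm topic modeling and multi-dimensional feature extraction, not keyword matching.

```python
# Hypothetical lexicon of anthropomorphic terms (illustrative, not from the paper).
ANTHRO_TERMS = {"friend", "personality", "feels", "understands",
                "betrayed", "loyal", "knows me"}

def is_anthropomorphic(comment: str) -> bool:
    """Flag a comment if it uses any term from the lexicon.

    Single-word terms are matched against whole words; multi-word
    terms are matched as substrings of the lowercased comment.
    """
    text = comment.lower()
    words = set(text.split())
    return any(term in words or (" " in term and term in text)
               for term in ANTHRO_TERMS)

# Invented sample comments, standing in for forum data.
comments = [
    "GPT-4 really understands what I need",
    "The new model feels colder somehow",
    "Latency dropped after the update",
]
# Fraction of comments flagged as anthropomorphic.
rate = sum(is_anthropomorphic(c) for c in comments) / len(comments)
```

A real analysis would of course need a validated lexicon or a trained classifier; this sketch only shows the shape of the measurement behind a claim like "nearly half of users employ anthropomorphic language."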
A Conceptual Framework for AI-based Decision Systems in Critical Infrastructures
Leyli-abadi, Milad, Bessa, Ricardo J., Viebahn, Jan, Boos, Daniel, Borst, Clark, Castagna, Alberto, Chavarriaga, Ricardo, Hassouna, Mohamed, Lemetayer, Bruno, Leto, Giulia, Marot, Antoine, Meddeb, Maroua, Meyer, Manuel, Schiaffonati, Viola, Schneider, Manuel, Waefler, Toni
Abstract-- The interaction between humans and AI in safety-critical systems presents a unique set of challenges that remain partially addressed by existing frameworks. These challenges stem from the complex interplay of requirements for transparency, trust, and explainability, coupled with the necessity for robust and safe decision-making. A framework that holistically integrates human and AI capabilities while addressing these concerns is notably required, bridging the critical gaps in designing, deploying, and maintaining safe and effective systems. This paper proposes a holistic conceptual framework for critical infrastructures by adopting an interdisciplinary approach. It integrates traditionally distinct fields such as mathematics, decision theory, computer science, philosophy, psychology, and cognitive engineering, and draws on specialized engineering domains, particularly energy, mobility, and aeronautics. Its flexibility is further demonstrated through a case study on power grid management.

Artificial Intelligence (AI) is showing high potential to transform the management of critical infrastructures [1], tackling pressing challenges like climate change and the rising demand for energy and mobility systems while advancing strategic objectives such as energy transition and digital transformation. On the other hand, integrating AI in critical sectors introduces significant challenges, many of which are already being addressed by emerging regulatory frameworks, such as the European Union AI Act. These frameworks emphasize the importance of safety, transparency, and adherence to ethical standards and principles to mitigate a wide range of risks, including technical, social, and environmental hazards associated with deploying AI in high-risk domains. Another key challenge lies in fostering effective human-AI collaboration.
- North America > Canada > Alberta > Census Division No. 11 > Edmonton Metropolitan Region > Edmonton (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Europe > Switzerland > Geneva > Geneva (0.04)
- Transportation > Air (1.00)
- Energy > Power Industry (1.00)
- Transportation > Infrastructure & Services (0.95)
- Government > Regional Government > Europe Government (0.48)
Human-AI collaboration or obedient and often clueless AI in instruct, serve, repeat dynamics?
Saqr, Mohammed, Misiejuk, Kamila, López-Pernas, Sonsoles
While research on human-AI collaboration exists, it has mainly examined language learning and used traditional counting methods, with little attention to the evolution and dynamics of collaboration on cognitively demanding tasks. This study examines human-AI interactions while solving a complex problem. Student-AI interactions were qualitatively coded and analyzed with transition network analysis, sequence analysis, and partial correlation networks, as well as comparison of frequencies using chi-square tests and Pearson-residual shaded mosaic plots, to map interaction patterns, their evolution, and their relationship to problem complexity and student performance. Findings reveal a dominant Instructive pattern, with interactions characterized by iterative ordering rather than collaborative negotiation. Oftentimes, students engaged in long threads that showed misalignment between their prompts and AI output, exemplifying a lack of synergy that challenges prevailing assumptions about LLMs as collaborative partners. We also found no significant correlations between assignment complexity, prompt length, and student grades, suggesting a lack of cognitive depth or an absent effect of problem difficulty. Our study indicates that current LLMs, optimized for instruction-following rather than cognitive partnership, are limited in their capacity to act as cognitively stimulating or aligned collaborators. Implications for designing AI systems that prioritize cognitive alignment and collaboration are discussed.
- North America > United States (0.04)
- Europe > Finland > North Karelia > Joensuu (0.04)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Instructional Material (1.00)
- Education > Educational Setting > Higher Education (0.68)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.67)
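The chi-square comparison of interaction-pattern frequencies mentioned in the abstract can be sketched with a hand-rolled Pearson chi-square on a contingency table. The counts, group labels, and pattern categories below are hypothetical, chosen only to show the mechanics of the test; they are not the study's data.

```python
# Hypothetical contingency table (illustrative counts, not the study's data).
# Rows: student performance group; columns: Instructive / Collaborative / Exploratory.
observed = [
    [60, 15, 25],   # higher-performing students
    [70, 10, 20],   # lower-performing students
]

def chi_square(table):
    """Pearson chi-square statistic for a two-way contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

stat = chi_square(observed)
df = (len(observed) - 1) * (len(observed[0]) - 1)  # degrees of freedom = 2
significant = stat > 5.991  # chi-square critical value for df=2 at alpha=0.05
```

With these invented counts the statistic is about 2.32, below the critical value, so the pattern distribution would not differ significantly between groups; the Pearson residuals `(obs - expected) / sqrt(expected)` inside such a test are what the study's residual-shaded mosaic plots visualize.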
On the Same Page: Dimensions of Perceived Shared Understanding in Human-AI Interaction
Shared understanding plays a key role in effective communication and performance in human-human interactions. With the increasingly common integration of AI into human contexts, the future of personal and workplace interactions will see a greater prevalence of human-AI interaction (HAII), in which the perception of shared understanding (PSU) will be important. Existing literature has addressed the processes and effects of PSU in human-human interactions, but the construct remains underexplored in HAII. To better understand PSU in that context, we conducted an online survey to collect user reflections on interactions with a large language model when its understanding of a situation was thought to be similar to or different from the participant's. Through inductive thematic analysis, we identified eight dimensions comprising PSU in human-AI interactions. The descriptive framework we derive supports an operational characterization of PSU and serves as a springboard for future work into the phenomenon.
- Asia > Macao (0.04)
- Oceania > Australia (0.04)
- North America > United States > New York > Onondaga County > Syracuse (0.04)
- Research Report > New Finding (0.46)
- Research Report > Experimental Study (0.46)
Human-AI Interaction and User Satisfaction: Empirical Evidence from Online Reviews of AI Products
Human-AI Interaction (HAI) guidelines and design principles have become increasingly important in both industry and academia to guide the development of AI systems that align with user needs and expectations. However, large-scale empirical evidence on how HAI principles shape user satisfaction in practice remains limited. This study addresses that gap by analyzing over 100,000 user reviews of AI-related products from G2.com, a leading review platform for business software and services. Based on widely adopted industry guidelines, we identify seven core HAI dimensions and examine their coverage and sentiment within the reviews. We find that the sentiment on four HAI dimensions (adaptability, customization, error recovery, and security) is positively associated with overall user satisfaction. Moreover, we show that engagement with HAI dimensions varies by professional background: users with technical job roles are more likely to discuss system-focused aspects, such as reliability, while non-technical users emphasize interaction-focused features like customization and feedback. Interestingly, the relationship between HAI sentiment and overall satisfaction is not moderated by job role, suggesting that once an HAI dimension has been identified by users, its effect on satisfaction is consistent across job roles.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
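The association reported above, between sentiment on an HAI dimension and overall satisfaction, amounts to a correlation across reviews. A minimal sketch with a hand-rolled Pearson correlation shows the shape of that analysis; the review scores below are invented, and the real study works with 100,000+ G2.com reviews and regression-style modeling rather than a single correlation.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical reviews: per-review sentiment on one HAI dimension
# (e.g. error recovery, scored in [-1, 1]) and overall star rating.
sentiment = [0.8, 0.2, -0.5, 0.9, -0.1, 0.4]
stars     = [5,   4,    2,   5,    3,   4]
r = pearson(sentiment, stars)  # strongly positive for this toy data
```

A positive r here would correspond to the paper's finding that favorable sentiment on dimensions like error recovery tracks higher overall satisfaction; the study additionally tests whether job role moderates this relationship (it does not).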
Experimental Exploration: Investigating Cooperative Interaction Behavior Between Humans and Large Language Model Agents
Jiang, Guanxuan, Wang, Yuyang, Hui, Pan
With the rise of large language models (LLMs), AI agents as autonomous decision-makers present significant opportunities and challenges for human-AI cooperation. While many studies have explored human cooperation with AI as tools, the role of LLM-augmented autonomous agents in competitive-cooperative interactions remains under-examined. This study investigates human cooperative behavior by engaging 30 participants who interacted with LLM agents exhibiting different characteristics (purported human, purported rule-based AI agent, and LLM agent) in repeated Prisoner's Dilemma games. Findings show significant differences in cooperative behavior based on the agents' purported characteristics and the interaction effect of participants' genders and purported characteristics. We also analyzed human response patterns, including game completion time, proactive favorable behavior, and acceptance of repair efforts. These insights offer a new perspective on human interactions with LLM agents in competitive cooperation contexts, such as virtual avatars or future physical entities. The study underscores the importance of understanding human biases toward AI agents and how observed behaviors can influence future human-AI cooperation dynamics.
- Asia > China (0.29)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Leisure & Entertainment > Games (1.00)
- Health & Medicine (1.00)
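The repeated Prisoner's Dilemma setting used in the study above can be sketched in a few lines. The payoff matrix is the standard one (T=5, R=3, P=1, S=0) and the two strategies are textbook examples, not the paper's participants or LLM agents; the sketch only illustrates how cooperation and scores evolve over repeated rounds.

```python
# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my payoff.
# C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=10):
    """Run repeated rounds; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

# Two illustrative strategies: tit-for-tat opens with cooperation and
# then mirrors the opponent; always-defect never cooperates.
tit_for_tat = lambda opp_last: "C" if opp_last in (None, "C") else "D"
always_defect = lambda opp_last: "D"

scores = play(tit_for_tat, always_defect, rounds=10)  # -> (9, 14)
```

Against always-defect, tit-for-tat is exploited once and then defects for the rest of the match, while two tit-for-tat players cooperate throughout; measuring how a human player's strategy shifts depending on who the opponent is purported to be is exactly the manipulation the study describes.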